Patent abstract:
The present invention relates to a method for deriving a motion prediction vector candidate and an apparatus using the method. An image decoding method may comprise the steps of: determining whether a block to be predicted is in contact with a boundary of a largest coding unit (LCU), and determining whether a first collocated block is available according to whether the block to be predicted is in contact with the boundary of the largest coding unit (LCU). Consequently, unnecessary memory bandwidth can be reduced and implementation complexity can be lowered (Fig. 6).
Publication number: SE1651148A1
Application number: SE1651148
Filing date: 2012-09-06
Publication date: 2016-08-25
Inventors: Cheol Kwon Jae; Young Kim Joo; Keun Lee Bae
Applicant: Kt Corp
IPC main classification:
Patent description:

[Description] [Title] METHOD FOR DERIVING A TEMPORAL PREDICTION MOTION VECTOR AND AN APPARATUS USING THE METHOD [Field of the Invention] The present invention relates to a method of encoding/decoding video data, and more particularly to a method of deriving a temporal prediction motion vector and an apparatus using the method.
[Prior Art] In recent times, the demand for high-resolution and high-quality video data has increased in different areas of application. As video data gets higher resolution and higher quality, so does the amount of information associated with video data. Consequently, when video data is transmitted using existing wired and wireless broadband connections or stored using conventional storage methods, the cost of its transmission and storage increases. Problems that arise when dealing with high-resolution and high-quality video data can be addressed with the help of highly efficient video data compression techniques.
A number of different compression techniques for video data have been introduced, for example the inter prediction technique, which predicts pixel values contained in the current image from a previous or subsequent image, the intra prediction technique, which predicts pixel values contained in the current image using pixel information within the current image, and the entropy coding technique, which assigns shorter codewords to values that occur more often and longer codewords to values that occur less often. Such video data compression techniques are used to efficiently compress, transfer or store video data.
[Technical Problem] An object of the present invention is to provide a method for deriving a temporal motion prediction vector for a block adjacent to a boundary belonging to a largest coding unit (LCU).
A second object of the present invention is to provide an apparatus for performing the method of deriving a temporal motion prediction vector for a block adjacent to a boundary belonging to a largest coding unit (LCU).
[Technical Solution] To achieve the former object and in accordance with an aspect of the present invention, there is provided a video data decoding method, which method comprises the steps of determining a reference image index of a collocated block of a prediction target block, and determining a motion prediction vector of the collocated block, the collocated block being a block determined adaptively using a location of the prediction target block within a largest coding unit (LCU). The collocated block can be determined by deciding whether a lower boundary of the prediction target block adjoins a boundary of the largest coding unit (LCU). The collocated block can be determined in another way by deciding whether a lower boundary of the prediction target block adjoins a boundary of the LCU and whether only a right boundary of the prediction target block adjoins a boundary of the largest coding unit (LCU). The collocated block can be determined by referring to pixel positions within the largest coding unit (LCU). If a left or lower boundary of the prediction target block does not adjoin the boundary of the largest coding unit (LCU), a first collocated block and a fifth collocated block are sequentially determined as the collocated block according to an availability of the collocated block at a corresponding position.
To achieve the second object and in accordance with an aspect of the present invention, there is provided a video data decoding method, which method may comprise the steps of determining whether a boundary of a prediction target block adjoins a boundary of a largest coding unit (LCU), and determining an availability of a first collocated block according to what has been determined as to whether the boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). The video data decoding method may further comprise, if it has been determined that the first collocated block is not available, determining a collocated block other than the first collocated block as the collocated block from which to derive a temporal prediction motion vector. The step of determining a collocated block other than the first collocated block as the collocated block from which to derive the temporal prediction motion vector, if the first collocated block is not available, is a step of determining different collocated blocks for deriving the temporal prediction motion vector in a case where a lower boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU) and in a case where only a right boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). The step of determining the availability of the first collocated block according to what has been determined as to whether the boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU) is a step of determining the first collocated block as unavailable if a lower boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). The method may also include the step of, if the first collocated block is available, determining the first collocated block as the collocated block from which to derive the temporal prediction motion vector, or, if the first collocated block is not available, determining the availability of a fifth collocated block.
In order to achieve the third object and in accordance with an aspect of the present invention, there is provided a video data decoding apparatus comprising an entropy decoding unit which decodes information corresponding to the largest coding unit (LCU), and a prediction unit which determines a reference image index of a collocated block of a prediction target block and determines a motion prediction vector of the collocated block, wherein the collocated block is a block that is determined adaptively by a location of the prediction target block within a largest coding unit (LCU). The collocated block can be determined by deciding whether a lower boundary of the prediction target block adjoins a boundary of a largest coding unit (LCU). The collocated block can be determined in another way by deciding whether a lower boundary of the prediction target block adjoins a boundary of the LCU and whether only a right boundary of the prediction target block adjoins a boundary of the largest coding unit (LCU). The collocated block can be determined by referring to pixel positions within the largest coding unit (LCU). If a left or lower boundary of the prediction target block does not adjoin the boundary of the largest coding unit (LCU), a first collocated block and a fifth collocated block are sequentially determined as the collocated block according to an availability of the collocated block at a corresponding position.
To achieve the fourth object and in accordance with an aspect of the present invention, there is provided a video data decoding apparatus which may include an entropy decoding unit which decodes information corresponding to a largest coding unit (LCU), and a prediction unit which determines whether a boundary of a prediction target block adjoins a boundary of the largest coding unit (LCU) and determines an availability of a first collocated block according to what has been determined as to whether the boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). The prediction unit may, if it has been determined that the first collocated block is not available, determine a collocated block other than the first collocated block as the collocated block from which to derive a temporal prediction motion vector. The prediction unit may determine different collocated blocks for deriving the temporal prediction motion vector in a case where the lower boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU) and in a case where only a right boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). The prediction unit may determine the first collocated block as unavailable if a lower boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). The prediction unit may, if the first collocated block is available, determine the first collocated block as the collocated block from which to derive the temporal prediction motion vector, or, if the first collocated block is not available, determine the availability of a fifth collocated block.
[Positive Effects] As described above and in accordance with an embodiment of the present invention, a method for deriving a temporal motion prediction vector and an apparatus using the method may use different collocated images from which a temporal motion vector is derived, depending on whether the prediction target block adjoins a boundary of the largest coding unit (LCU). By using the method, the memory bandwidth that would otherwise be used unnecessarily to derive a temporal motion vector can be reduced and implementation complexity can be minimized.
[Description of Figures] FIG. 1 is a block diagram illustrating an apparatus for encoding video data in accordance with an embodiment of the present invention.
Fig. 2 is a block diagram illustrating a decoder of video data in accordance with another embodiment of the present invention.
Fig. 3 schematically illustrates a method for deriving a temporal prediction motion vector in accordance with an embodiment of the present invention.
Fig. 4 is a flow chart illustrating a method for deriving a temporal prediction motion vector in accordance with an embodiment of the present invention. Fig. 5 schematically illustrates the position of a collocated block for deriving a temporal motion vector in accordance with an embodiment of the present invention.
Fig. 6 schematically illustrates a method of determining a collocated block for deriving a motion prediction vector in accordance with an embodiment of the present invention.
Fig. 7 schematically illustrates a case where a prediction target block adjoins a lower boundary of a largest coding unit (LCU) in accordance with an embodiment of the present invention.
Fig. 8 is a flow chart illustrating an inter prediction method using merge mode in accordance with an embodiment of the present invention.
Fig. 9 schematically illustrates the locations of spatial merge candidates in accordance with an embodiment of the present invention.
Fig. 10 is a flow chart illustrating a prediction method using AMVP in accordance with an embodiment of the present invention.
[Description of the Embodiments of the Invention] The present invention may be modified in various ways, and the present invention may have a number of embodiments. Specific embodiments have been described in detail with reference to the figures. However, the present invention is not limited to the specific embodiments, and it is to be understood that the present invention includes all modifications, equivalents or substitutions that fall within the spirit and scope of the present invention. The same reference numerals may be used for the same modules when the figures are described. The terms "first" and "second" can be used to describe different components (or features). However, the components are not limited thereto. These terms are used only to distinguish one component from another. For example, the first component may also be called the second component, and the second component may similarly be called the first component. The term "and/or" encompasses a combination of a plurality of related objects as described herein or any one of the plurality of related objects.
When a component (or feature) is "connected" or "coupled" to another component, the component may be directly connected or coupled to that other component, or intervening components may also be present. In contrast, when a component is "directly connected" or "directly coupled" to another component, there are no additional components in between.
The terms used herein are used to describe the embodiments and not to limit the present invention. A term in the singular includes the term in the plural unless otherwise clearly stated. As used herein, terms such as "include" or "have" indicate that the features, numbers, steps, operations, components, parts or combinations thereof described herein are present, without excluding the presence or the possibility of adding one or more other features, numbers, steps, operations, components, parts or combinations thereof.
In the following, typical embodiments of the present invention will be described in detail and with reference to the accompanying drawings. A reference numeral refers to one and the same element in all the drawings and redundant description of one and the same element in different drawings will not be included.
Fig. 1 is a block diagram illustrating an apparatus for encoding video data in accordance with an embodiment of the present invention. Referring to Fig. 1, the video data encoder 100 may include an image splitting module 110, an inter prediction module 120, an intra prediction module 125, a transform module 130, a quantization module 135, a rearrangement module 160, an entropy encoding module 165, a dequantization module 140, an inverse transform module 145, a filter module 150 and a memory 155. In Fig. 1 each module is shown independently of the others to represent the different functions of the video data encoder, but this does not mean that each module should be implemented by means of a dedicated hardware or software module unit. That is, for convenience of description, the modules are shown independently, and at least two of the modules can be combined to form a single module, or one module can be divided into a plurality of modules that perform its functions. The embodiments with merged or separated modules are also encompassed by the present invention without departing from the spirit of the present invention.
In addition, some of the modules are not essential modules that perform essential functions of the present invention, but are optional modules for improving performance. The present invention may be implemented with only the essential modules necessary to implement the spirit of the present invention, excluding the modules used merely for performance improvement, and this structure is also covered by the scope of the present invention.
The image splitting module 110 can split an incoming image into at least one processing unit. The processing unit may be a prediction unit (PU), a transform unit (TU) or a coding unit (CU). The image splitting module 110 can encode the image by dividing it into a combination of a plurality of coding units, prediction units and transform units, and a combination of a coding unit, a prediction unit and a transform unit can be selected according to a predetermined criterion, such as a cost function, and then encoded.
For example, an image can be divided into a plurality of coding units. A recursive tree structure, such as a quad tree structure, can be used to divide the image into coding units. With the image or a largest coding unit as a root, the image can be divided into coding units with as many child nodes as the number of divided coding units. A coding unit that cannot be further divided due to a predetermined constraint becomes a leaf node. That is, assuming that only square division is available for a coding unit, a coding unit can be divided into at most four further coding units. In the following, in the embodiments of the present invention, the coding unit may mean a unit in which both coding and decoding are performed.
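For illustration, a minimal sketch of the recursive quad tree division described above follows; the `should_split` decision function is a hypothetical stand-in for whatever predetermined constraint or cost criterion an encoder actually applies.

```python
def split_into_coding_units(x, y, size, min_cu_size, should_split):
    """Recursively divide a square region (an LCU at the top level) into CUs.

    x, y         -- top-left position of the current region
    size         -- current square size in pixels
    min_cu_size  -- smallest allowed CU size (leaf nodes cannot be split further)
    should_split -- callable deciding, e.g. from a cost function, whether to split
    Returns a list of (x, y, size) tuples, one per resulting coding unit.
    """
    if size <= min_cu_size or not should_split(x, y, size):
        return [(x, y, size)]          # leaf node: this region becomes one CU
    half = size // 2                   # quad tree split: four equally sized children
    cus = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        cus.extend(split_into_coding_units(x + dx, y + dy, half,
                                           min_cu_size, should_split))
    return cus

# Example: split a 64x64 LCU, splitting every region larger than 32x32.
print(split_into_coding_units(0, 0, 64, 8, lambda x, y, s: s > 32))
```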
A prediction unit can be divided into at least one square or rectangular shape of the same size within a coding unit.
When a prediction unit on which intra prediction is to be performed is generated based on a coding unit, if the coding unit is not a smallest coding unit, intra prediction can be performed without dividing the prediction unit into a plurality of NxN prediction units.
A prediction module may comprise an inter prediction module 120 which performs inter prediction and an intra prediction module 125 which performs intra prediction. It can be determined whether inter prediction or intra prediction is to be performed on the prediction unit, and in accordance with each prediction method specific information (e.g. intra prediction mode, motion vector, reference image, etc.) can be determined. Here, the processing unit on which the prediction is performed may differ from the processing unit for which the prediction method and its details are determined. A residual value (residual block) between a generated prediction block and an original block can be input to the transform module 130. In addition, prediction mode information, motion vector information and so on, which are used for the prediction, can be encoded together with the residual value in the entropy encoding module 165 and can then be transferred to a decoding device. If a specific coding mode is used, rather than generating the prediction block by means of the prediction modules 120 and 125, the original block can be encoded as it is and transferred to the decoding device. The inter prediction module can predict a prediction unit based on information of at least one image among the images either before the current image or after the current image. The inter prediction module may comprise a reference image interpolation module, a motion prediction module and a motion compensation module. The reference image interpolation module may receive reference image information from the memory 155 and may generate pixel information in units smaller than an integer pixel unit within the reference image. In the case of luminance pixels, a DCT-based 8-tap interpolation filter with different filter coefficients for each tap can be used to generate pixel information in units smaller than the integer pixel unit, namely in units of 1/4 pixel. In the case of chrominance pixels, a DCT-based 4-tap interpolation filter with different filter coefficients for each tap can be used to generate pixel information in units smaller than an integer pixel unit, namely in units of 1/8 pixel.
A motion prediction module can perform motion prediction based on a reference image interpolated by the reference image interpolation module. To derive a motion vector, various methods, such as FBMA (full search-based block matching algorithm) or TSS (three-step search), can be used. The motion vector may have a motion vector value in a half pixel unit or in a quarter pixel unit based on an interpolated pixel. The motion prediction module can predict a current prediction unit by applying different motion prediction methods. As motion prediction methods, different methods, such as the skip method, the merge method or an AMVP (advanced motion vector prediction) method, can be used. According to an embodiment of the present invention, the prediction module can determine whether a boundary of a prediction target block adjoins a boundary of an LCU (largest coding unit), and it can determine whether a first collocated block is available in accordance with what has been determined regarding whether the boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). For example, when the first collocated block is not available, a second collocated block can be determined as the collocated block from which to derive a temporal prediction motion vector. Or, in the case where the first collocated block is not available, a position of the first collocated block is changed and the position-changed first collocated block can be determined as the collocated block from which to derive a temporal prediction motion vector.
In addition, the prediction module may determine a reference image index of a collocated block of a prediction target block and determine a motion prediction vector of the collocated block. The collocated block is a block that is adaptively determined by the location of the prediction target block within the largest coding unit (LCU). In the following, the operation of the prediction module in accordance with the present invention will be described in detail. The intra prediction module can generate a prediction unit based on information of reference pixels adjacent to the current block, that is, pixel information of pixels in the current image. In the case where the block adjacent to the current prediction unit is a block to which inter prediction has been applied, and the reference pixel is thus a pixel obtained through inter prediction, the reference pixels included in the block to which inter prediction has been applied can be replaced by reference pixel information of a block to which intra prediction has been applied.
In the case of intra prediction, the prediction modes may include directional prediction modes, in which reference pixel information is used according to the prediction direction, and non-directional modes, in which no directional information is used when performing the prediction. The mode for predicting luminance information may be different from the mode for predicting chrominance information. In addition, information about the intra prediction mode with which the luminance information has been predicted, or the predicted luminance information itself, can be used to predict the chrominance information.
When intra prediction is performed, if the size of a prediction unit is the same as the size of a transform unit, the intra prediction is performed based on pixels located on the left side of the prediction unit, a pixel located at the top left relative to the prediction unit and pixels located at the top of the prediction unit. However, when intra prediction is performed, if the size of a prediction unit differs from the size of a transform unit, the intra prediction is performed using reference pixels based on the transform unit. In addition, and only for a smallest coding unit, intra prediction can be performed using NxN division. In the intra prediction method, a prediction block can be generated after an MDIS (mode-dependent intra smoothing) filter is applied to the reference pixels in accordance with the prediction mode. Different types of MDIS filters can be applied to the reference pixels. To perform the intra prediction method, an intra prediction mode of the current prediction unit can be predicted based on the intra prediction mode of a prediction unit neighboring the current prediction unit. In the case where the prediction mode of the current prediction unit is predicted using mode information predicted from the neighboring prediction unit, if the intra prediction mode of the current prediction unit is the same as the intra prediction mode of the neighboring prediction unit, predetermined flag information can be used to transmit this prediction mode information. And if the prediction mode of the current prediction unit differs from the prediction mode of the neighboring prediction unit, entropy coding can be performed to encode the prediction mode information of the current block. In addition, a residual block can be derived, which block comprises information about a residual value which is the difference between an original block of a prediction unit and the prediction block generated based on the prediction unit in the prediction modules 120 and 125. The derived residual block may be input to the transform module 130. The transform module 130 may transform the residual block by means of a transform method, such as discrete cosine transform (DCT) or discrete sine transform (DST). The residual block comprises residual information between the prediction unit generated by the prediction modules 120 and 125 and the original block. Whether DCT or DST should be applied to transform the residual block can be determined based on intra prediction mode information of the prediction unit used to generate the residual block.
The quantization module 135 can quantize values transformed to the frequency domain by means of the transform module 130. A quantization parameter may vary depending on the importance of a block or image. A value produced in the quantization module 135 may be provided to the dequantization module 140 and the rearrangement module 160.
The rearrangement module 160 can perform rearrangement of coefficients for the quantized residual value.
The rearrangement module 160 can change two-dimensional (2D), block-shaped coefficients into a one-dimensional (1D) vector form by means of a coefficient scanning method. For example, the rearrangement module 160 may use a diagonal scanning method to scan from DC coefficients to high-frequency coefficients, thereby arranging the 2D, block-shaped coefficients into a 1D vector form. Depending on the size of the transform unit and the intra prediction mode, instead of the diagonal scanning method, a vertical scanning method, where the 2D, block-shaped coefficients are scanned in a column direction, or a horizontal scanning method, where the 2D, block-shaped coefficients are scanned in a row direction, can be used. In other words, one of the diagonal, vertical or horizontal scanning methods can be used depending on the size of the transform unit and the intra prediction mode.
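As an illustration of the coefficient scanning described above, a minimal Python sketch follows; the scan shown is a simplified up-right diagonal over the whole block, not the exact sub-block scan of any particular codec.

```python
def diagonal_scan_order(size):
    """Return a simplified up-right diagonal scan order for a size x size block,
    starting at the DC coefficient (0, 0)."""
    order = []
    for d in range(2 * size - 1):                          # each anti-diagonal
        for y in range(min(d, size - 1), max(0, d - size + 1) - 1, -1):
            order.append((y, d - y))
    return order

def scan_coefficients(block, order):
    """Rearrange a 2D block of coefficients into a 1D vector using a scan order."""
    return [block[y][x] for (y, x) in order]

block = [[9, 4, 1, 0],
         [3, 2, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
print(scan_coefficients(block, diagonal_scan_order(4)))
```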
The entropy coding module 165 can perform an entropy coding based on values produced in the rearrangement module 160. Various coding methods, such as exponential Golomb and CABAC, can be used for the entropy coding.
The entropy encoding module 165 can encode various information, such as residual coefficient information and block type information of the coding unit, prediction mode information, division unit information, motion vector information, reference frame information, interpolation information for a block, filtering information and information on the size of the LCU, provided from the rearrangement module 160 and the prediction modules 120 and 125.
The entropy coding module 165 can perform an entropy coding based on coefficient values of the coding unit as input from the rearrangement module 160 and by using an entropy coding method, such as CABAC.
The dequantization module 140 can perform dequantization of values quantized in the quantization module 135, and the inverse transform module 145 can perform an inverse transform of values transformed in the transform module 130. The values generated by the dequantization module 140 and the inverse transform module 145 may be added to the prediction unit predicted by the intra prediction module included in the prediction modules 120 and 125, thereby generating a reconstructed block.
A filter module 150 may comprise at least one of a deblocking filter, an offset correction module and an ALF (adaptive loop filter).
A deblocking filter can remove block distortion that occurs due to block boundaries in the reconstructed image. Whether the deblocking filter should be applied to a current block can be determined based on pixels included in a number of rows or columns of the block. If the deblocking filter is applied to the block, either a strong or a weak filter can be applied according to the required strength of the deblocking filtering. In addition, when the deblocking filter is applied to the block, horizontal and vertical filtering can be performed in parallel.
An offset correction module can correct an offset between an original image and an image to which deblocking has been applied, in pixel units. In order to perform offset correction on a specific image, the pixels included in the image are divided into a predetermined number of regions, one of which is determined to be subjected to an offset, and a method of applying the offset to the corresponding region or a method of applying an offset taking into account edge information of each pixel can be used.
An ALF (adaptive loop filter) can perform filtering based on a value obtained by comparing the filtered, reconstructed image with the original image. The pixels included in an image are divided into predetermined groups, a filter to be applied to the corresponding group is determined, and a differentiated filtering can thus be performed for each group. Information on whether an ALF should be applied to a luminance signal can be transmitted for each coding unit, and the size and coefficients of the ALF to be applied can vary for each block. The ALF can have various shapes, and the number of coefficients included in the filter can vary accordingly. Filtering-related information of the ALF (e.g. coefficient information, on/off information, filter shape) can be transmitted included in a predetermined parameter set in the bitstream. The memory 155 can store reconstructed blocks or images generated by the filter module 150, and the stored reconstructed block or image can be provided to the prediction modules 120 and 125 when inter prediction is performed. Fig. 2 is a block diagram illustrating a decoder of video data in accordance with another embodiment of the present invention.
Referring to Fig. 2, the video data decoder may include an entropy decoding module 210, a rearrangement module 215, a dequantization module 220, an inverse transform module 225, prediction modules 230 and 235, a filter module 240 and a memory 245. When a video data bitstream is input from the video data encoder, the incoming bitstream can be decoded in a procedure opposite to the procedure in the video data encoder.
The entropy decoding module 210 may perform entropy decoding in a procedure opposite to the entropy encoding procedure performed in the entropy encoding module of the video data encoder. Among the information decoded in the entropy decoding module 210, information used to derive a prediction block, such as size information regarding the LCU or the block, is provided to the prediction modules 230 and 235, and residual values derived through entropy decoding in the entropy decoding module 210 can be input to the rearrangement module 215.
The entropy decoding module 210 can decode information related to the intra prediction and the inter prediction performed in the encoder. As described above, if there is a predetermined constraint when the video data encoder performs intra prediction and inter prediction, entropy decoding is performed based on such a constraint, thereby receiving information regarding the intra prediction and the inter prediction of the current block. The rearrangement module 215 can perform rearrangement, based on the method used by the encoder, of the bitstream entropy-decoded in the entropy decoding module 210. Such rearrangement can be performed by reconstructing the coefficients represented in 1D vector form into 2D, block-shaped coefficients. The dequantization module 220 can perform dequantization based on the quantization parameters provided from the encoder and the block of rearranged coefficients.
The inverse transform module 225 can perform an inverse DCT and an inverse DST with respect to the DCT and the DST performed by the transform module, on the result of the quantization performed in the video data encoder. The inverse transform can be performed based on a transform unit determined in the video data encoder. The transform module of the video data encoder can selectively perform DCT and DST depending on a plurality of pieces of information, such as the prediction method, the size of the current block and the prediction direction, and the inverse transform module 225 of the video data decoder can perform an inverse transform based on the transform information used by the transform module of the video data encoder.
The prediction modules 230 and 235 may generate a prediction block based on the previously decoded block or the previously decoded image information provided by the memory 245 and on prediction block generation-related information provided by the entropy decoding module 210.
The prediction modules 230 and 235 may comprise a prediction unit determining module, an inter prediction module and an intra prediction module. The prediction unit determining module can receive various information, such as prediction unit information, prediction mode information of an intra prediction method and motion prediction-related information of an inter prediction method, input from the entropy decoding module. The prediction unit determining module can separate the prediction unit from the current coding unit and can determine whether intra prediction or inter prediction is performed on the prediction unit. The inter prediction module can perform inter prediction on the current prediction unit in accordance with information included in at least one of the images before or after the current image. The inter prediction module can perform the inter prediction on the current prediction unit by using the information necessary for inter prediction of the current prediction unit, provided from the video data encoder.
In order to perform inter prediction, it can be determined, based on a coding unit, whether the motion prediction method for the prediction unit included in the corresponding coding unit is skip mode, merge mode or AMVP mode.
According to an embodiment of the present invention, the inter prediction module can determine whether a boundary of a prediction target block adjoins a boundary of an LCU (largest coding unit), and it can determine whether a first collocated block is available in accordance with what has been determined regarding whether the boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). For example, when the first collocated block is not available, a second collocated block can be determined as the collocated block from which to derive a temporal prediction motion vector. Or, in the case where the first collocated block is not available, a position of the first collocated block is changed and the position-changed first collocated block can be determined as the collocated block from which to derive a temporal prediction motion vector.
In addition, the prediction module may determine a reference image index of a collocated block of a prediction target block and determine a motion prediction vector of the collocated block. The collocated block is a block that is adaptively determined by the location of the prediction target block within the largest coding unit (LCU). In the following, the operation of the prediction module in accordance with the present invention will be described in detail. The intra prediction module can generate a prediction block based on pixel information in the current image. In the case where the prediction unit is one to which intra prediction is applied, the intra prediction can be performed based on information about the intra prediction mode of the prediction unit provided by the video data encoder. The intra prediction module may comprise an MDIS filter, a reference pixel interpolation module and a DC filter. The MDIS filter performs filtering on the reference pixels of the current block. For the MDIS filter it can be decided, according to the prediction mode of the current prediction unit, whether the filter should be applied. Filtering of the reference pixels of the current block can be performed using the MDIS filter information and the prediction mode of the prediction unit provided by the video data encoder. If the prediction mode of the current block is a mode in which filtering is not performed, the MDIS filter may not be applied. If the prediction mode of the prediction unit is a prediction mode in which intra prediction is performed based on pixel values obtained by interpolating the reference pixels, reference pixels in units smaller than an integer pixel can be derived by interpolating the reference pixels. If the prediction mode of the prediction unit is a prediction mode in which the prediction block is generated without interpolation of the reference pixels, the reference pixels may not be subjected to interpolation. The DC filter can generate a prediction block by filtering if the prediction mode of the current block is a DC mode.
The reconstructed block or image may be provided to the filter module 240. The filter module 240 may comprise a deblocking filter, an offset correction module and an ALF.
Information on whether a deblocking filter has been applied to the corresponding block or image can be provided from the video data encoder (or image data encoder). If the deblocking filter has been applied, information regarding whether the applied filter is a strong or a weak filter can be provided from the video data encoder. The deblocking filter of the video data decoder can receive deblocking-related information from the video data encoder, and deblocking filtering can be performed on the corresponding block in the video data decoder. As with the video data encoder, the video data decoder can first perform vertical deblocking filtering and horizontal deblocking filtering. An overlapping portion can be subjected to at least one of vertical deblocking and horizontal deblocking. In the area where vertical deblocking filtering and horizontal deblocking filtering overlap, the one of the vertical deblocking filtering and the horizontal deblocking filtering that has not previously been performed can be performed in this area. This deblocking filtering process enables parallel processing of the deblocking filtering. An offset correction module can correct an offset on the reconstructed image based on the type of offset correction applied to the image during coding and information on offset values applied in the coding process.
An ALF can perform filtering according to a comparison between the reconstructed image after filtering and the original image. The ALF can be applied in a coding unit based on information on whether the ALF should be applied, as well as coefficient information, which information is provided by the encoder. Such ALF-related information can be provided included in a specific parameter set. The memory 245 may store a reconstructed image or a reconstructed block for use as a reference image or a reference block, and the reconstructed block or the reconstructed image may be provided to a display module.
As described above, although the term "coding unit" is used in the embodiments of the present invention for convenience of description, the coding unit may also be used as a unit for decoding. In the following, the prediction method described below in connection with Figs. 3 to 11 in accordance with an embodiment of the present invention can be performed in a component such as the prediction module shown in Figs. 1 and 2.
Fig. 3 schematically illustrates a method for deriving a temporal prediction motion vector in accordance with an embodiment of the present invention. Referring to Fig. 3, the temporal prediction motion vector can be derived based on a motion vector value of a collocated block (colPu) in a collocated image (colPic).
The collocated image is an image comprising a collocated block used to derive information related to temporal prediction motion when an inter prediction method, such as merge or AMVP, is performed. The collocated block can be defined as a block included in the collocated image, and the collocated block is derived based on location information of a prediction target block and has a temporal phase different from that of the prediction target block.
There can be several collocated blocks for one prediction target block. Motion-related information of the collocated block included in the collocated image can be stored as one representative value with respect to a predetermined unit. For example, with respect to a unit of 16x16 block size, motion prediction-related information (motion vector, reference image, etc.) can be determined and stored as one representative value per 16x16 block unit.
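As an illustration of storing motion-related information as one representative value per 16x16 unit, a minimal sketch follows; the grid granularity, the helper names and the rounding rule are assumptions for illustration only.

```python
GRID = 16  # assumed granularity of the representative motion-information storage

def representative_position(x, y, grid=GRID):
    """Map a pixel position to the top-left position of the grid cell whose
    stored motion information represents it."""
    return (x // grid) * grid, (y // grid) * grid

def fetch_collocated_motion(motion_field, x, y):
    """motion_field maps representative positions to (motion_vector, ref_idx)."""
    return motion_field.get(representative_position(x, y))

# Example: both positions fall in the same 16x16 unit and share one stored value.
field = {(32, 48): ((5, -3), 0)}
print(fetch_collocated_motion(field, 37, 50))
print(fetch_collocated_motion(field, 47, 63))
```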
Fig. 4 is a flow chart illustrating a method for deriving a temporal prediction motion vector in accordance with an embodiment of the present invention. In the following, the method for deriving a temporal prediction motion vector, which will be described below, can be used in an inter prediction method such as merge mode or AMVP mode. The method for deriving a temporal prediction motion vector may be a method for deriving a temporal candidate block (collocated block) for performing AMVP mode, and a method for deriving a temporal prediction motion vector. In the following, in an embodiment of the present invention, the collocated block may be defined and used as a term indicating a temporal candidate block used in merge mode and AMVP mode. Referring to Fig. 4, the collocated image information is derived (step S400).
Position information, size information and reference image index information of the prediction target block can be used to derive the collocated image information, the collocated block information and the temporal prediction motion vector.
According to an embodiment of the present invention, the collocated image information can be derived based on slice type information (slice_type), reference image list information (collocated_from_l0_flag) and reference image index information (collocated_ref_idx). If the reference image list information (collocated_from_l0_flag) indicates 1, it represents that the collocated image is included in a first reference image list (List 0), and if the reference image list information (collocated_from_l0_flag) indicates 0, it represents that the collocated image is included in a second reference image list (List 1).
For example, if the slice type is a B slice and a value of the reference image list information (collocated_from_l0_flag) is 0, the collocated image can be derived as an image included in the second reference image list, and if the slice type is a B slice and a value of the reference image list information (collocated_from_l0_flag) is 1, the collocated image can be derived as an image included in the first reference image list. If an inter prediction method uses a merge mode and a predetermined condition is met, reference image index information of a neighboring block at a specific position can be determined as information for the collocated image, and if the predetermined condition is not met, a previous image of the current image can be determined as the collocated image. Information for the collocated block is derived (step S410). The information for the collocated block can be derived in different ways depending on which part (or portion) of the prediction target block adjoins a boundary of the LCU. In the following, a method for determining a collocated block depending on the location of the prediction target block and the boundary of the LCU will be described with reference to Figs. 5 to 9.
Fig. 5 schematically illustrates positions of collocated blocks for deriving a temporal motion vector in accordance with an embodiment of the present invention. Referring to Fig. 5, blocks at different positions with respect to a prediction target block can be used as collocated blocks for deriving a temporal motion vector. The collocated blocks that can be used to derive the temporal motion vector can be classified according to their locations as follows. In the case where the location of the top-left point of the prediction target block is (xP, yP), the width of the prediction target block is nPSW and the height of the prediction target block is nPSH, a first collocated block 500 may be a block comprising the point (xP + nPSW, yP + nPSH) in the collocated image, a second collocated block 510 may be a block comprising the point (xP + nPSW - MinPuSize, yP + nPSH) in the collocated image, a third collocated block 520 may be a block comprising the point (xP + nPSW, yP + nPSH - MinPuSize) in the collocated image, a fourth collocated block 530 may be a block comprising the point (xP + nPSW - 1, yP + nPSH - 1) in the collocated image, a fifth collocated block 540 may be a block comprising the point (xP + (nPSW >> 1), yP + (nPSH >> 1)) in the collocated image, and a sixth collocated block 550 may be a block comprising the point (xP + (nPSW >> 1) - 1, yP + (nPSH >> 1) - 1) in the collocated image.
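For illustration, the following sketch restates the candidate points listed above as a function of (xP, yP), nPSW, nPSH and MinPuSize.

```python
def collocated_candidate_positions(xP, yP, nPSW, nPSH, min_pu_size):
    """Return the points that identify the first to sixth collocated blocks
    (500-550 in Fig. 5) inside the collocated image."""
    return {
        "first":  (xP + nPSW,                yP + nPSH),                # 500
        "second": (xP + nPSW - min_pu_size,  yP + nPSH),                # 510
        "third":  (xP + nPSW,                yP + nPSH - min_pu_size),  # 520
        "fourth": (xP + nPSW - 1,            yP + nPSH - 1),            # 530
        "fifth":  (xP + (nPSW >> 1),         yP + (nPSH >> 1)),         # 540
        "sixth":  (xP + (nPSW >> 1) - 1,     yP + (nPSH >> 1) - 1),     # 550
    }

# Example: a 16x16 prediction target block at (32, 32) with MinPuSize = 4.
print(collocated_candidate_positions(32, 32, 16, 16, 4))
```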
The collocated block can be determined adaptively according to the position of the current prediction target block within the largest coding unit (LCU). The positional relationship between the prediction target block and the boundary of the LCU can be categorized into the following cases: 1) the lower end and the right side of the prediction target block do not adjoin the LCU boundary, 2) only the lower end of the prediction target block adjoins the LCU boundary, 3) both the right side and the lower end of the prediction target block adjoin the LCU boundary, and 4) only the right side of the prediction target block adjoins the LCU boundary.
According to an embodiment of the present invention, the collocated block can be determined adaptively and differently depending on the location of the prediction target block within the largest coding unit (LCU). 1) In the case where the lower end and the right side of the prediction target block do not adjoin the LCU boundary, the first collocated block and the fifth collocated block can be sequentially used as the collocated block, with an availability check, to derive a temporal motion vector. 2) In the case where only the lower end of the prediction target block adjoins the LCU boundary, the third collocated block and the fifth collocated block can be sequentially used as the collocated block, with an availability check, to derive a temporal motion vector. 3) In the case where both the right side and the lower end of the prediction target block adjoin the LCU boundary, the fourth collocated block and the fifth collocated block can be sequentially used as the collocated block, with an availability check, to derive a temporal motion vector. 4) In the case where only the right side of the prediction target block adjoins the LCU boundary, the second collocated block and the fifth collocated block can be sequentially used as the collocated block, with an availability check, to derive a temporal motion vector.
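A minimal sketch of the case analysis above, assuming square LCUs of a known size; the function names and the way the boundary test is written are illustrative and not taken from the patent text.

```python
def select_collocated_candidates(xP, yP, nPSW, nPSH, lcu_size):
    """Return the (primary, fallback) collocated blocks to try, in order,
    according to whether the prediction target block touches the right and/or
    lower boundary of its LCU."""
    touches_right = (xP % lcu_size) + nPSW >= lcu_size
    touches_lower = (yP % lcu_size) + nPSH >= lcu_size

    if not touches_lower and not touches_right:   # case 1
        return ("first", "fifth")
    if touches_lower and not touches_right:       # case 2
        return ("third", "fifth")
    if touches_lower and touches_right:           # case 3
        return ("fourth", "fifth")
    return ("second", "fifth")                    # case 4: only the right boundary

# Example with 64x64 LCUs: a 16x16 block whose lower edge lies on the LCU boundary.
print(select_collocated_candidates(16, 48, 16, 16, 64))   # ('third', 'fifth')
```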
That is, according to an embodiment of the present invention and depending on the placement of the current block within the LCU, a temporal candidate block can be determined adaptively. The pixel positions for specifying a temporal candidate block in the case where the lower end of the current block adjoins the LCU boundary may be different from the pixel positions for specifying a temporal candidate block in the case where the lower end of the current block does not adjoin the LCU boundary. And the pixel positions for specifying a temporal candidate block in the case where the lower end of the current block adjoins the LCU boundary may differ from the pixel positions for specifying a temporal candidate block in the case where only the right boundary of the current block adjoins the LCU boundary.
According to another embodiment of the present invention, a method may be used in which a collocated block is determined (or selected) adaptively and differently depending on the position of the prediction target block within the LCU, so that the collocated block and the prediction target block are located in the same LCU; or the collocated block may not be used if the collocated block and the prediction target block are not located in the same LCU.
Fig. 6 schematically illustrates a method of determining a collocated block for deriving a motion prediction vector in accordance with an embodiment of the present invention. Referring to Fig. 6, the positions of collocated blocks for a plurality of prediction units included in one LCU can be identified. In the case of PU0, PU1, PU2 and PU5, the prediction units are located inside the LCU, and the first collocated block can first be used as the collocated block to derive a temporal motion vector.
In the case of PU4 and PU7, the boundaries of the prediction units adjoin only the lower LCU boundary, and the third collocated block can first be used as the collocated block to derive a temporal motion vector.
In the case of PU8, the boundary of the prediction unit adjoins both the lower and the right LCU boundary, and the fourth collocated block can first be used as the collocated block to derive a temporal motion vector.
In the case of PU3 and PU6, the boundaries of the prediction units adjoin only the right LCU boundary, and the second collocated block can first be used as the collocated block to derive a temporal motion vector.
That is, as described above, a temporal candidate block can be determined adaptively depending on the location of the current block within the LCU, and the pixel positions for specifying a temporal candidate block in the cases where the lower end of the current block adjoins the LCU boundary (PU4, PU7 and PU8) and in the cases where the lower end of the current block does not adjoin the LCU boundary (PU0, PU1, PU2, PU3, PU5 and PU6) are different. In addition, the pixel positions for specifying a temporal candidate block are different in the cases where the lower end of the current block adjoins the LCU boundary (PU4, PU7 and PU8) and in the cases where only the right end of the current block adjoins the LCU boundary (PU3 and PU6).
According to another embodiment of the present invention, the collocated block can be determined adaptively and differently depending on the location of the prediction target block within the LCU, as long as the collocated block is located within the same LCU as the prediction target block. If a specific collocated block is not located within the same LCU as the prediction target block, it is possible that such a specific collocated block is not available. For example, if the lower end of the prediction block adjoins the LCU boundary, as for PU4, PU7 and PU8, the first collocated block can be marked (or indicated) as unavailable, and the fifth collocated block can instead be used as the collocated block to derive a temporal prediction vector.
As a method for deriving a collocated block, a method can be used which categorizes the characteristics of a prediction target block as described above depending on the location of the prediction target block and the LCU boundary, and which selects a block to be used as the collocated block depending on the categorized location of the prediction target block. Preferably, it is assumed that the first collocated block and the fifth collocated block can be used sequentially as the collocated block to derive a temporal motion vector. After the availability of the first collocated block has been checked (e.g. whether the lower boundary of the prediction target block adjoins the LCU), a collocated block other than the first collocated block can be determined as the collocated block from which to derive a temporal motion vector. For example, if it is determined, through the steps of determining whether the prediction target block adjoins the LCU boundary, that the first collocated block is not available, the collocated block for deriving a temporal motion vector can be changed to another collocated block (e.g. the third collocated block), or the fifth collocated block can be used directly without using the first collocated block.
Specifically, the above method can be performed by executing the following steps: 1) The step of determining whether a boundary of a prediction target block adjoins a boundary of a largest coding unit (LCU). 2) The step of determining the availability of a first collocated block in accordance with what has been determined regarding whether the boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU). Specifically, in this step, when the lower boundary of the prediction target block adjoins the boundary of the LCU, it is determined that the first collocated block is not available. 3) The step of determining a collocated block other than the first collocated block as the collocated block from which to derive a temporal prediction motion vector when the first collocated block is not available. Specifically, in this step, for the case where the lower boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU) and for the case where only the right boundary of the prediction target block adjoins the boundary of the largest coding unit (LCU), different collocated blocks can be determined, for each case, from which to derive a temporal prediction motion vector. 4) The step of determining the first collocated block as the collocated block from which to derive a temporal prediction motion vector if the first collocated block is available, and determining the availability of the fifth collocated block if the first collocated block is not available.
The above steps may be optional and the sequential relationship of the method steps may be changed without departing from the spirit of the present invention.
Fig. 7 schematically illustrates a case where a prediction target block adjoins a lower boundary of a largest coding unit (LCU) in accordance with an embodiment of the present invention. Referring to Fig. 7, the case where the location of the collocated block is changed when the prediction target block (PU4, PU7 or PU8) is located at the lower boundary of the LCU is shown. If the prediction target block (PU4, PU7 or PU8) is located at the lower boundary of the LCU, the location of the collocated block can be set so that motion prediction-related information can be derived even without searching an LCU positioned below the current LCU. For example, a temporal prediction motion vector can be derived by using the third collocated block and not the first collocated block of the prediction target block. 1) If only the right boundary of the prediction target block adjoins the LCU boundary, the first collocated block and the fifth collocated block are, depending on availability, sequentially determined as the collocated block from which to derive a temporal prediction motion vector. 2) If the lower boundary of the prediction target block adjoins the LCU boundary, the third collocated block and the fifth collocated block are, depending on availability, sequentially determined as the collocated block from which to derive a temporal prediction motion vector. That is, according to an embodiment of the present invention, the pixel positions for specifying a temporal candidate block in the case where the lower end of the prediction target block adjoins the LCU boundary may differ from the pixel positions for specifying a temporal candidate block in the case where the lower end of the current prediction target block does not adjoin the LCU boundary. Referring back to Fig. 4, based on the collocated block determined by the method described above in connection with Figs. 5 to 7, a motion prediction vector (mvLXCol) of the collocated block and availability information of the collocated block (availableFlagLXCol) are derived (step S420).
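For illustration, a sketch of the idea that the collocated position is kept within the current LCU row so that the LCU below the current LCU never needs to be accessed; the helper names and the LCU-size parameter are assumptions.

```python
def position_requires_lower_lcu(y_candidate, yP, lcu_size):
    """True if a candidate point lies below the LCU row that contains the
    prediction target block, i.e. its motion data would live in the LCU below."""
    lcu_row_bottom = (yP // lcu_size + 1) * lcu_size   # first row of the LCU below
    return y_candidate >= lcu_row_bottom

def choose_temporal_candidate(first_pos, third_pos, yP, lcu_size):
    """Prefer the first collocated block, but fall back to the third one when the
    first block's point would require reading the LCU below the current LCU."""
    if position_requires_lower_lcu(first_pos[1], yP, lcu_size):
        return third_pos
    return first_pos

# Example: block at yP = 48 in a 64x64 LCU; its first candidate point (64, 64)
# lies in the LCU below, so the third candidate point is used instead.
print(choose_temporal_candidate((64, 64), (64, 60), 48, 64))
```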
The availability information of the collocated block (availableFlagLXCol) and the motion vector (mvLXCol) to be used for inter prediction of the prediction target block, based on the collocated block information determined by the processes shown in Figs. 5 to 7, can be derived using the following method: 1) If the collocated block (colPu) is encoded based on an intra prediction mode, if the collocated block (colPu) is not available, if the collocated image (colPic) is not available for predicting a temporal prediction motion vector, or if inter prediction is performed without a temporal prediction motion vector being used, the availability information of the collocated block (availableFlagLXCol) and the motion vector (mvLXCol) can be set to 0. 2) In cases other than case 1 above, the availability information of the collocated block (availableFlagLXCol) and the motion vector (mvLXCol) are derived using a flag (PredFlagL0) and a flag (PredFlagL1), where PredFlagL0 indicates whether the L0 list has been used or not and PredFlagL1 indicates whether the L1 list has been used or not.
First, if it has been determined that inter prediction has been performed on the collocated block without using the list L0 (the flag PredFlagL0 is equal to 0), the motion prediction-related information of the collocated block, such as mvCol information, refIdxCol information and listCol information, can be set to L1, MvL1[xPCol][yPCol] and RefIdxL1[xPCol][yPCol], which are motion prediction-related information of the collocated block derived using the list L1, and the availability information of the collocated block (availableFlagLXCol) can be set to 1. In other cases, if it has been determined that inter prediction has been performed on the collocated block using the list L0 (the flag PredFlagL0 is equal to 1), the motion prediction-related information of the collocated block, such as mvCol information, refIdxCol information and listCol information, can be set separately depending on whether the flag PredFlagL1 is equal to 1, and the availability information of the collocated block (availableFlagLXCol) can be set to 1.
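A simplified sketch of the derivation just described; it mirrors only the cases stated in the text (collocated block intra-coded or unavailable, and the PredFlagL0/PredFlagL1 cases) and omits the list-selection and scaling details of a complete derivation.

```python
def derive_collocated_motion(col_pu_available, col_pu_is_intra,
                             pred_flag_l0, mv_l0, ref_idx_l0,
                             pred_flag_l1, mv_l1, ref_idx_l1):
    """Return (availableFlagLXCol, mvLXCol, refIdxCol, listCol) for the collocated
    block, following the two cases described above in simplified form."""
    # Case 1: collocated block unusable -> flag and motion vector are set to 0.
    if not col_pu_available or col_pu_is_intra:
        return 0, (0, 0), -1, None
    # Case 2: the collocated block was inter-predicted.
    if pred_flag_l0 == 0:
        # List L0 was not used, so the L1 motion information is taken.
        return 1, mv_l1, ref_idx_l1, "L1"
    # List L0 was used; in a full derivation the choice between L0 and L1
    # additionally depends on PredFlagL1 and further conditions.
    if pred_flag_l1 == 0:
        return 1, mv_l0, ref_idx_l0, "L0"
    return 1, mv_l0, ref_idx_l0, "L0"   # simplified: both lists were used

print(derive_collocated_motion(True, False, 0, (0, 0), -1, 1, (3, -2), 1))
```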
The derived mvLXCol is scaled (step S430).
To use the mvLXCol derived in step S420 as a temporal prediction motion vector of the prediction target block, the derived mvLXCol value can be scaled based on distance information regarding a distance between the composite image including the composite block and the reference image referenced by the composite block, and a distance between the image including the prediction target block and the reference image referenced by the prediction target block. After the derived mvLXCol value has been scaled, the temporal prediction motion vector can be derived. In the following and in accordance with an embodiment of the present invention, a method, such as merge or AMVP, for performing inter prediction is described.
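As a non-authoritative illustration, the distance-based scaling can be sketched with the integer arithmetic commonly used in HEVC-style codecs; the exact constants below are assumptions of this sketch, not a quotation of the claimed method.

    def clip3(lo, hi, v):
        return max(lo, min(hi, v))

    def scale_col_mv(mv, curr_poc_diff, col_poc_diff):
        # curr_poc_diff: distance between the current image and the reference image
        #                referenced by the prediction target block
        # col_poc_diff:  distance between the composite image and the reference image
        #                referenced by the composite block
        if col_poc_diff == 0 or col_poc_diff == curr_poc_diff:
            return mv
        td = clip3(-128, 127, col_poc_diff)
        tb = clip3(-128, 127, curr_poc_diff)
        tx = int((16384 + (abs(td) >> 1)) / td)        # integer division toward zero
        dist_scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)
        sign = -1 if dist_scale * mv < 0 else 1
        return clip3(-32768, 32767, sign * ((abs(dist_scale * mv) + 127) >> 8))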
Fig. 8 is a flow chart illustrating an inter prediction method using the merge mode in accordance with an embodiment of the present invention. Referring to Fig. 8, information related to motion prediction can be derived from a spatial merge candidate (step S1000).
The spatial merge candidate can be derived from neighboring prediction units of a prediction target block. To derive the spatial merge candidate, information regarding the width and height of the prediction unit, the MER (motion estimation range), singleMCLFlag and the partition location can be received. Based on such input data, availability information (availableFlagN) according to the position of the spatial merge candidate, reference image information (refIdxL0, refIdxL1), list usage information (PredFlagL0N, PredFlagL1N) and motion vector information (mvL0N, mvL1N) can be derived. A number of blocks adjacent to the prediction target block may be the spatial merge candidates.
Fig. 9 schematically illustrates the placement of spatial merge candidates in accordance with an embodiment of the present invention. Referring to Fig. 9, if the location of the top-left point of the prediction target block is (xP, yP), the width of the prediction target block is nPSW and the height of the prediction target block is nPSH, then the spatial merge candidates can be a block A0 including the point (xP - 1, yP + nPSH), a block A1 including the point (xP - 1, yP + nPSH - MinPuSize), a block B0 including the point (xP + nPSW, yP - 1), a block B1 including the point (xP + nPSW - MinPuSize, yP - 1) and a block B2 including the point (xP - MinPuSize, yP - 1). Referring again to Fig. 8, a reference image index value is derived from the temporal merge candidate (step S1010).
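A minimal sketch of the candidate positions listed above, assuming min_pu_size stands for MinPuSize; this is an illustration, not part of the disclosure.

    def spatial_merge_candidate_positions(xP, yP, nPSW, nPSH, min_pu_size):
        # Returns the sample position contained in each neighbouring block of Fig. 9.
        return {
            "A0": (xP - 1, yP + nPSH),
            "A1": (xP - 1, yP + nPSH - min_pu_size),
            "B0": (xP + nPSW, yP - 1),
            "B1": (xP + nPSW - min_pu_size, yP - 1),
            "B2": (xP - min_pu_size, yP - 1),
        }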
The reference image index value of the temporal merge candidate, as an index value of a composite image including the temporal merge candidate (composite block), can be derived under the specific conditions below. The following conditions are optional and may vary. For example, in the case where the location of the top-left point of the prediction target block is (xP, yP), the width of the prediction target block is nPSW and the height of the prediction target block is nPSH, when 1) there is a neighboring prediction unit of the prediction target block corresponding to position (xP - 1, yP + nPSH - 1) (hereinafter referred to as the neighbor prediction unit corresponding to the derived reference image index), 2) the partition index value of the neighbor prediction unit corresponding to the derived reference image index is 0, 3) the neighbor prediction unit corresponding to the derived reference image index is not a block on which prediction is performed with an intra prediction mode, and 4) the prediction target block and the neighbor prediction unit corresponding to the derived reference image index do not belong to the same MER (motion estimation range), then the reference image index value of the temporal merge candidate can be set to the same value as the reference image index value of the neighbor prediction unit corresponding to the derived reference image index. If these conditions are not met, the reference image index value of the temporal merge candidate is set to 0.
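A hedged sketch of the four conditions above, using a hypothetical Neighbor record for the prediction unit covering (xP - 1, yP + nPSH - 1); the record and its fields are assumptions of the sketch.

    from collections import namedtuple

    Neighbor = namedtuple("Neighbor", ["part_idx", "is_intra", "ref_idx"])

    def temporal_merge_ref_idx(neighbor, in_same_mer):
        # neighbor is None when no prediction unit exists at the position.
        if (neighbor is not None              # condition 1)
                and neighbor.part_idx == 0    # condition 2)
                and not neighbor.is_intra     # condition 3)
                and not in_same_mer):         # condition 4)
            return neighbor.ref_idx
        return 0                              # otherwise the reference index is 0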
The temporal merge candidate block (composite block) is determined, and information related to motion prediction is derived from the composite block (step S1020).
According to an embodiment of the present invention, the temporal merge candidate block (composite block) can be determined adaptively depending on the location of the prediction target block in the LCU, so that the composite block is included in the same LCU as the prediction target block. 1) If neither the right nor the lower boundary of the prediction target block borders on the boundary belonging to the largest coding unit (LCU), then, with an availability check, the first composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector. 2) If only the lower boundary of the prediction target block borders on the boundary belonging to the largest coding unit (LCU), then, with an availability check, the third composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector. 3) If both the right and the lower boundaries of the prediction target block border on the boundary belonging to the largest coding unit (LCU), then, with an availability check, the fourth composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector. 4) If only the right boundary of the prediction target block borders on the boundary belonging to the largest coding unit (LCU), then, with an availability check, the second composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector.
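For illustration only, cases 1) to 4) can be summarized as a selection of which composite blocks to try, in order; the indices 1 to 5 refer to the first to fifth composite blocks of the description, and the availability check itself is outside this sketch.

    def collocated_candidate_order(right_on_lcu_boundary, lower_on_lcu_boundary):
        # Returns the composite-block indices to examine sequentially.
        if lower_on_lcu_boundary and right_on_lcu_boundary:
            return (4, 5)   # case 3): fourth, then fifth composite block
        if lower_on_lcu_boundary:
            return (3, 5)   # case 2): third, then fifth composite block
        if right_on_lcu_boundary:
            return (2, 5)   # case 4): second, then fifth composite block
        return (1, 5)       # case 1): first, then fifth composite block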
According to an embodiment of the present invention, a method may be used in which the composite block is determined adaptively, depending on the location of the prediction target block in the largest coding unit (LCU), to be at a position included in the same LCU as the prediction target block, or a composite block that is not included in the same LCU as the prediction target block may not be used. As described above, one method of producing a composite block is to categorize the features of a prediction target block as described above depending on the location of the prediction target block relative to the LCU boundary, and to select a block to be used as the composite block depending on the categorized location of the prediction target block. It is preferred that the first composite block and the fifth composite block be used sequentially as the composite block to derive a temporal motion vector. Whether the first composite block is available (e.g., whether the lower boundary of the prediction target block adjoins the LCU boundary) is determined, and then a composite block other than the first composite block may be determined as the composite block to derive a temporal motion vector.
The list of merge candidates is constructed (step S1030).
The list of merge candidates can be constructed to include at least one spatial merge candidate and one temporal merge candidate. The spatial merge candidates and the temporal merge candidates included in the list of merge candidates can be arranged with a predetermined priority.
The list of merge candidates can be constructed to have a fixed number of merge candidates, and if the number of merge candidates is smaller than the fixed number, the information related to motion prediction owned by the merge candidates is combined to create new merge candidates, or zero vectors are generated as merge candidates.
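A minimal sketch of filling the list up to a fixed size with zero-vector candidates; the list size of 5 and the omission of combined bi-predictive candidates are assumptions of the sketch.

    MAX_NUM_MERGE_CAND = 5  # assumed fixed list size

    def pad_merge_list(candidates, num_ref_pictures):
        # Append zero-vector candidates with increasing reference indices until
        # the list reaches the fixed size.
        ref_idx = 0
        while len(candidates) < MAX_NUM_MERGE_CAND:
            candidates.append({"mv": (0, 0),
                               "ref_idx": min(ref_idx, num_ref_pictures - 1)})
            ref_idx += 1
        return candidates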
Fig. 10 is a flow chart illustrating an inter prediction method using AMVP in accordance with an embodiment of the present invention. Referring to Fig. 10, information related to motion prediction is derived from spatial AMVP candidate blocks (step S1200).
To derive the reference image index information and a prediction motion vector of the prediction target block, spatial AMVP candidate blocks can be derived from neighboring prediction blocks of the prediction target block.
Referring to Fig. 9, one of the blocks A0 and A1 may be used as a first spatial AMVP candidate block and one of the blocks B0 to B2 may be used as a second spatial AMVP candidate block, whereby the spatial AMVP candidate blocks are derived. Information related to motion prediction is derived from a temporal AMVP candidate block (step S1210).
According to an embodiment of the present invention, the composite block can be determined adaptively depending on the location of the prediction target block in the LCU, so that the composite block is included in the same LCU as the prediction target block. 1) If neither the right nor the lower boundary of the prediction target block borders on the boundary belonging to the largest coding unit (LCU), then the first composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector with an availability check. 2) If only the lower boundary of the prediction target block borders on the boundary belonging to the largest coding unit (LCU), then the third composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector with an availability check. 3) If both the right and the lower boundaries of the prediction target block border on the boundary belonging to the largest coding unit (LCU), then the fourth composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector with an availability check. 4) If only the right boundary of the prediction target block borders on the boundary belonging to the largest coding unit (LCU), then the second composite block and the fifth composite block can be used sequentially as the composite block to derive a temporal motion vector with an availability check.
According to an embodiment of the present invention, a method in which the composite block is not included in the same LCU as the prediction target block may not be used, and a method may be used in which the composite block is determined adaptively, depending on the location of the prediction target block in the largest coding unit (LCU), to be at a position included in the same LCU as the prediction target block. In step S1200, where the spatial AMVP candidate blocks are derived, if the first spatial AMVP candidate block and the second spatial AMVP candidate block are determined to be available and the derived motion prediction vector values are not identical, then step S1210, where a temporal prediction motion vector is derived, may not be performed.
The list of AMVP candidates is created (step S1220).
The list of AMVP candidates is created using the information related to motion prediction derived from at least one of steps S1200 and S1210. If identical information related to motion prediction is found in the created list of AMVP candidates, then only one value among the identical pieces of information related to motion prediction is used as an AMVP candidate value. The information related to motion prediction that is included in the list of AMVP candidates can only include a fixed number of candidate values. Although the embodiments of the present invention have been described so far, it will be understood by those skilled in the art that the present invention may be modified and varied in various ways without departing from the spirit or scope of the present invention.
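A hedged sketch of building the AMVP candidate list with removal of identical values and a fixed list size; the list size of 2 and the zero-vector filling are assumptions of the sketch, not statements of the disclosure.

    MAX_NUM_AMVP_CAND = 2  # assumed fixed list size

    def build_amvp_list(spatial_mvs, temporal_mv=None):
        candidates = []
        for mv in spatial_mvs + ([temporal_mv] if temporal_mv is not None else []):
            if mv not in candidates:      # identical motion vectors are kept only once
                candidates.append(mv)
        while len(candidates) < MAX_NUM_AMVP_CAND:
            candidates.append((0, 0))     # assumption: fill the list with zero vectors
        return candidates[:MAX_NUM_AMVP_CAND]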
Claims (4)
[1]
A method of decoding a video signal, comprising: obtaining a spatial prediction motion vector of a current block by means of a motion vector of a spatially adjacent block adjacent to the current block; determining a composite block in a composite image based on whether a lower boundary of the current block is adjacent to a lower boundary of a largest coding unit; obtaining a temporal prediction motion vector of the current block by means of a motion vector of the composite block; generating a list of motion vector candidates for the current block, the list of motion vector candidates including the spatial prediction motion vector and the temporal prediction motion vector; and performing an inter prediction of the current block based on the generated list of motion vector candidates.
[2]
The method according to claim 1, wherein the spatially adjacent block is one of the following: a left adjacent block, a left and lower adjacent block, an upper adjacent block, an upper and right adjacent block, and an upper and left adjacent block.
[3]
An apparatus for decoding a video signal, comprising: an inter prediction unit configured to obtain a spatial prediction motion vector of a current block by means of a motion vector of a spatially adjacent block adjacent to the current block, to determine a composite block in a composite image based on whether a lower boundary of the current block is adjacent to a lower boundary of a largest coding unit, to obtain a temporal prediction motion vector of the current block by means of a motion vector of the composite block, to generate a list of motion vector candidates for the current block, the list of motion vector candidates including the spatial prediction motion vector and the temporal prediction motion vector, and to perform an inter prediction of the current block based on the generated list of motion vector candidates.
[4]
The apparatus according to claim 3, wherein the spatially adjacent block is one of the following: a left adjacent block, a left and lower adjacent block, an upper adjacent block, an upper and right adjacent block, and an upper and left adjacent block.